  contemporary photorealism  
From: Paris
Date: 14 Dec 2004 16:15:01
Message: <web.41bf577e1db2d3fb765651f90@news.povray.org>
POV-Ray is lagging farther and farther behind commercial rendering
software in terms of photorealism.  There are many reasons why this is
the case, but some are more evident and more easily solved than others.

1.  The Phong model is outdated now.  The next release of POV-Ray should
use physically-based BRDFs, and keep the Phong model around only for
compatibility.  Phong makes everything look like plastic, including what
we have been calling "glass".  The difference between POV-Ray glass and
physically-based glass in other packages is STRIKING.  Real glass shows
the Fresnel effect: light that hits the surface at a shallow, grazing
angle is reflected almost as if from a perfect mirror.  My suggestion is
to allow BRDFs for those who need them, and base the documentation
around the Phong model as usual.  (A small sketch of the Fresnel effect
follows below.)
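
To make the Fresnel point concrete, here is a tiny Python sketch using
Schlick's approximation (my own illustration, not what POV-Ray does
internally):

def schlick_fresnel(cos_theta, ior=1.5):
    """Schlick's approximation of Fresnel reflectance for a dielectric.

    cos_theta: cosine of the angle between the view ray and the surface
    normal; ior: index of refraction (1.5 is typical for glass).
    """
    # Reflectance at normal incidence (looking straight at the surface).
    r0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    # Reflectance climbs toward 1.0 as the angle becomes shallow.
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

print(schlick_fresnel(1.0))   # head-on: glass reflects only ~4%
print(schlick_fresnel(0.05))  # grazing: ~0.78, nearly a perfect mirror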

2.  POV-Ray does not have hair, fuzz, fur, or suede textures.  Brushed
metal would be nice also, and car paint too.  I won't ask for
subsurface-scattered flesh just yet; it tends to be very time-consuming
to implement.  There are many materials out there that can only be
rendered using path-tracing techniques, such as very shiny,
partially-reflective gold.  A few others are glossy reflections (blurred
reflections) and frosted glass; a sketch of how glossy reflection is
commonly done follows below.
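
The usual path-tracing approach to glossy reflection is to average many
rays jittered around the perfect-mirror direction.  A rough Python
sketch (trace() is a hypothetical stand-in for the renderer's ray
query, returning an (r, g, b) tuple):

import random

def glossy_reflection(trace, origin, mirror_dir, roughness, samples=64):
    """Average many jittered rays; wider jitter = blurrier reflection."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        # Perturb the perfect-mirror direction by a small random offset.
        d = [c + roughness * random.uniform(-1.0, 1.0) for c in mirror_dir]
        # Renormalize so it is still a unit direction.
        length = sum(c * c for c in d) ** 0.5
        d = [c / length for c in d]
        color = trace(origin, d)
        total = [t + c for t, c in zip(total, color)]
    return [t / samples for t in total]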

3.  POV-Ray uses distributed ray-tracing to simulate global
illumination.  Reflected parts of the scene do not have the radiosity
calculation performed on them.  I have found this to be highly
frustrating.  (Simply create a radiosity room and stick a large
reflecting ball in the middle to see what I mean.)  The speed-ups to
this method leave even the most advanced users scratching their heads.
I've been using POV-Ray since it was named DKB-Trace, and honestly, I'm
still not sure what "minimum_reuse" means under the radiosity settings.
If you think about what distributed ray-tracing does, you will notice it
works by tracing rays into DARK PARTS of the scene, hoping for a swath
of light.  It doesn't take a professor to realize that this is a wasted
calculation: tracing a ray into a dark part of the scene will
mathematically never make a difference in the shaded pixel.  There are
methods which make an unhappy marriage between photon shooting and
"gathering" from those shots, based on importance sampling.  By making
the algorithm more complex, they avoid WASTED CALCULATIONS (see the toy
illustration after this paragraph).  Also, these bi-directional
algorithms (as they are called) make the settings for the end-user very
simple.  So we need not worry that more complex algorithms will "confuse
the end-user even more".
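
Here is a toy 1-D Python illustration of the wasted-calculation
argument (the bright region and its width are made-up numbers):

import random

def incoming_light(x):
    # All the light comes from one narrow bright region; the rest is dark.
    return 10.0 if 0.45 < x < 0.55 else 0.0

def uniform_estimate(n=1000):
    # Distributed ray tracing: sample blindly, so ~90% of the rays land
    # in the dark and contribute nothing.
    return sum(incoming_light(random.random()) for _ in range(n)) / n

def importance_estimate(n=1000):
    # Photon-guided gathering: a prepass (hard-coded here) tells us where
    # the light is, so every sample lands in the bright region.  The pdf
    # there is 1/0.1 = 10, so each sample is weighted by 1/pdf.
    return sum(incoming_light(random.uniform(0.45, 0.55)) / 10.0
               for _ in range(n)) / n

# Both estimate the true integral (1.0), but the importance-sampled
# estimator is essentially noise-free here, while the uniform one is
# noisy.
print(uniform_estimate(), importance_estimate())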

4.  There are other physically-based methods out there that turn
ray-tracing on its head.  I pretty much expect future versions of
POV-Ray to move away from the Phong model (part 1) and implement a few
BRDFs for popular surfaces, but other methods would be nice to see also,
though I have less faith they will appear.  There are ways to calculate
light in rendering in which you do not even use RGB color space.  These
algorithms use spectral integration, and build the picture out of pixels
that are colored with the SPECTRUM, rather than RGB triples.  This kind
of rendering allows you to specify the wattage of incandescent light
bulbs, or even simulate flash bulbs from particular cameras.  REALISTIC
SUNLIGHT, of course, is the biggest pay-off of this method.  (A toy
sketch of spectral integration follows below.)  Other freeware packages
on the web that attempt to do this are usually written by a single busy
person, and they are hopelessly buggy or just plain do not work.  That
is the reason I have come to this board to make this suggestion: POV-Ray
is the most robust and stable free rendering software in the world.
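
A toy Python sketch of spectral integration.  The Gaussians are crude
single-lobe stand-ins for the CIE 1931 colour-matching functions; a real
spectral renderer would use the tabulated CIE data:

import math

def x_bar(lam):  # crude Gaussian fits, for illustration only
    return 1.06 * math.exp(-0.5 * ((lam - 600.0) / 38.0) ** 2)

def y_bar(lam):
    return 1.00 * math.exp(-0.5 * ((lam - 557.0) / 46.0) ** 2)

def z_bar(lam):
    return 1.78 * math.exp(-0.5 * ((lam - 449.0) / 26.0) ** 2)

def planck(lam_nm, temp_k):
    # Unnormalized Planck spectral radiance of a blackbody emitter.
    lam = lam_nm * 1e-9
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * temp_k)) - 1)

def spectrum_to_xyz(temp_k):
    # Integrate the emitter's spectrum against the matching functions
    # over the visible range, 380-780 nm.
    x = y = z = 0.0
    for lam in range(380, 781, 5):
        s = planck(lam, temp_k)
        x, y, z = x + s * x_bar(lam), y + s * y_bar(lam), z + s * z_bar(lam)
    return x, y, z

# A 2856 K incandescent bulb integrates to a warm (x-heavy) colour,
# while 6500 K daylight is far more balanced.  Scaling the spectrum by
# the bulb's wattage scales the result, which is what exposure (part 5)
# then acts on.
print(spectrum_to_xyz(2856))
print(spectrum_to_xyz(6500))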

5.  Even without spectral integration, you can render in RGB space and
still do EXPOSURE simulations.  (Usually exposure simulation is not used
unless a certain amount of "energy" is calculated to be passing through
the camera's aperture, but it can be done ad hoc in RGB space also, with
some finagling.)  This basically works by storing floating-point triples
in each pixel, none of which are CLIPPED or "toned down" to fit into
0.0 --> 1.0.  The idea is that even with the wide amplitude of real
light, you always want your display adapter to use its contrast ratio to
the maximum.  A pass is performed over this final image, the workings of
which are controlled by the user.  The plugin that someone made to
simulate this is not robust enough.  The user must be allowed to map
directly to floating-point triples, and then "slide" this window around
on an image whose contrast range is much larger than 0.0 to 1.0.
Automatically "stretching" the mapping to fit the entire contrast range
of the triples is not always what you want.  (A minimal sketch of this
sliding exposure window follows below.)
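
A minimal Python sketch of the sliding exposure window I mean (the
names and numbers are my own):

def expose(hdr_pixel, stops=0.0):
    """Map an unclipped floating-point pixel into displayable 0.0 --> 1.0.

    stops slides the exposure window: +1 stop doubles the brightness,
    -1 stop halves it, just like a camera.
    """
    scale = 2.0 ** stops
    return tuple(min(max(c * scale, 0.0), 1.0) for c in hdr_pixel)

sunlit = (4.0, 3.5, 3.0)           # far brighter than the display range
print(expose(sunlit, stops=0.0))   # clips to pure white
print(expose(sunlit, stops=-3.0))  # slide down 3 stops: detail reappears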

--
Paris

